
    Compilation for QCSP

    We propose in this article a framework for the compilation of quantified constraint satisfaction problems (QCSP). We establish the semantics of this formalism through an interpretation to a QCSP. We specify an algorithm, embedded into a search algorithm and based on the inductive semantics of QCSP, that compiles a QCSP. We introduce an optimality property and demonstrate the optimality of the interpretation of the compiled QCSP.
    Comment: Proceedings of the 13th International Colloquium on Implementation of Constraint LOgic Programming Systems (CICLOPS 2013), Istanbul, Turkey, August 25, 2013

    Scaling-up Empirical Risk Minimization: Optimization of Incomplete U-statistics

    In a wide range of statistical learning problems such as ranking, clustering or metric learning among others, the risk is accurately estimated by U-statistics of degree d ≥ 1, i.e. functionals of the training data with low variance that take the form of averages over k-tuples. From a computational perspective, the calculation of such statistics is highly expensive even for a moderate sample size n, as it requires averaging O(n^d) terms. This makes learning procedures relying on the optimization of such data functionals hardly feasible in practice. It is the major goal of this paper to show that, strikingly, such empirical risks can be replaced by drastically computationally simpler Monte-Carlo estimates based on O(n) terms only, usually referred to as incomplete U-statistics, without damaging the O_P(1/√n) learning rate of Empirical Risk Minimization (ERM) procedures. For this purpose, we establish uniform deviation results describing the error made when approximating a U-process by its incomplete version under appropriate complexity assumptions. Extensions to model selection, fast rate situations and various sampling techniques are also considered, as well as an application to stochastic gradient descent for ERM. Finally, numerical examples are displayed in order to provide strong empirical evidence that the approach we promote largely surpasses more naive subsampling techniques.
    Comment: To appear in Journal of Machine Learning Research. 34 pages. v2: minor correction to Theorem 4 and its proof, added 1 reference. v3: typo corrected in Proposition 3. v4: improved presentation, added experiments on model selection for clustering, fixed minor typo
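
    As an illustration of the idea of incomplete U-statistics (a sketch written for this listing, not the authors' code), the snippet below compares a complete degree-2 U-statistic, which averages a kernel over all pairs, with its incomplete counterpart built from B pairs sampled at random; the kernel h(x, y) = |x - y| (the Gini mean difference) and all names are illustrative choices.

```python
import itertools

import numpy as np

rng = np.random.default_rng(0)


def complete_u_stat(x, h):
    """Average the kernel h over all n*(n-1)/2 pairs: O(n^2) terms for degree 2."""
    pairs = itertools.combinations(range(len(x)), 2)
    return float(np.mean([h(x[i], x[j]) for i, j in pairs]))


def incomplete_u_stat(x, h, B):
    """Monte-Carlo estimate: average h over B pairs drawn uniformly at random."""
    idx = rng.integers(0, len(x), size=(B, 2))
    idx = idx[idx[:, 0] != idx[:, 1]]            # drop degenerate pairs (i == j)
    return float(np.mean([h(x[i], x[j]) for i, j in idx]))


h = lambda a, b: abs(a - b)                      # Gini mean difference kernel
x = rng.normal(size=2000)

print(complete_u_stat(x, h))                     # averages ~2e6 terms
print(incomplete_u_stat(x, h, B=2000))           # averages O(n) terms, close in value
```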

    Extending Gossip Algorithms to Distributed Estimation of U-Statistics

    Efficient and robust algorithms for decentralized estimation in networks are essential to many distributed systems. Whereas distributed estimation of sample mean statistics has been the subject of a good deal of attention, computation of U-statistics, relying on more expensive averaging over pairs of observations, is a less investigated area. Yet, such data functionals are essential to describe global properties of a statistical population, with important examples including Area Under the Curve, empirical variance, Gini mean difference and within-cluster point scatter. This paper proposes new synchronous and asynchronous randomized gossip algorithms which simultaneously propagate data across the network and maintain local estimates of the U-statistic of interest. We establish convergence rate bounds of O(1/t) and O(log t / t) for the synchronous and asynchronous cases respectively, where t is the number of iterations, with explicit data- and network-dependent terms. Beyond favorable comparisons in terms of rate analysis, numerical experiments provide empirical evidence that the proposed algorithms surpass the previously introduced approach.
    Comment: to be presented at NIPS 201
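
    The toy simulation below conveys the flavour of such a synchronous scheme (a simplified reading of the abstract, not the algorithm analyzed in the paper): every node keeps a running average of a pairwise kernel evaluated between its own observation and an auxiliary observation, while randomly chosen edges swap auxiliary observations and average the local estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 50                                         # number of nodes
x = rng.normal(size=n)                         # one observation per node
edges = [(k, (k + 1) % n) for k in range(n)]   # ring topology (any connected graph works)
h = lambda a, b: np.abs(a - b)                 # pairwise kernel (Gini mean difference)

aux = x.copy()                                 # auxiliary observations propagated by gossip
z = np.zeros(n)                                # local estimates of the U-statistic

for t in range(1, 20001):
    # every node folds one more kernel evaluation into its running average
    z = (1 - 1.0 / t) * z + (1.0 / t) * h(x, aux)
    # a random edge swaps auxiliary data and averages the two local estimates
    i, j = edges[rng.integers(len(edges))]
    aux[i], aux[j] = aux[j], aux[i]
    z[i] = z[j] = 0.5 * (z[i] + z[j])

u_stat = np.mean([h(x[i], x[j]) for i in range(n) for j in range(i + 1, n)])
# local estimates cluster near the U-statistic (up to a small O(1/n) bias of this toy version)
print(u_stat, z.mean(), z.std())
```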

    Gossip Dual Averaging for Decentralized Optimization of Pairwise Functions

    In decentralized networks (of sensors, connected objects, etc.), there is an important need for efficient algorithms to optimize a global cost function, for instance to learn a global model from the local data collected by each computing unit. In this paper, we address the problem of decentralized minimization of pairwise functions of the data points, where these points are distributed over the nodes of a graph defining the communication topology of the network. This general problem finds applications in ranking, distance metric learning and graph inference, among others. We propose new gossip algorithms based on dual averaging which aim at solving such problems both in synchronous and asynchronous settings. The proposed framework is flexible enough to deal with constrained and regularized variants of the optimization problem. Our theoretical analysis reveals that the proposed algorithms preserve the convergence rate of centralized dual averaging up to an additive bias term. We present numerical simulations on Area Under the ROC Curve (AUC) maximization and metric learning problems which illustrate the practical interest of our approach.
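
    The sketch below gives a schematic view of decentralized dual averaging on a pairwise objective (a deliberately simplified illustration, not the authors' exact updates): dual variables are mixed by a doubly stochastic gossip matrix and accumulate stochastic pairwise gradients, while auxiliary observations hop across the network so that every node eventually sees every pair. The quadratic pairwise loss, ring topology and step sizes are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 30
x = rng.normal(size=n)                         # one data point per node
P = np.zeros((n, n))                           # doubly stochastic gossip matrix on a ring
for k in range(n):
    P[k, k] = 0.5
    P[k, (k - 1) % n] = P[k, (k + 1) % n] = 0.25

# pairwise loss f(w; a, b) = (w - |a - b|)^2; averaged over all pairs, its minimizer
# is the mean pairwise gap of the sample
grad = lambda w, a, b: 2.0 * (w - np.abs(a - b))

aux = x.copy()                                 # auxiliary observations propagated by gossip
z = np.zeros(n)                                # dual variables
w = np.zeros(n)                                # primal iterates
w_avg = np.zeros(n)                            # running average of primal iterates
gamma = 1.0

for t in range(1, 5001):
    g = grad(w, x, aux)                        # stochastic pairwise gradients
    z = P @ z + g                              # gossip-average the duals, add gradients
    w = -gamma * z / np.sqrt(t)                # dual averaging step with psi(w) = w**2 / 2
    w_avg += (w - w_avg) / t
    i, j = rng.integers(n), rng.integers(n)    # propagate data: swap two auxiliary points
    aux[i], aux[j] = aux[j], aux[i]

target = np.mean([np.abs(x[i] - x[j]) for i in range(n) for j in range(n)])
print(target, w_avg.mean())                    # nodes approach the pairwise minimizer
```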

    A Semantic Characterization for ASP Base Revision

    The paper deals with base revision for Answer Set Programming (ASP). Base revision in classical logic is done by the removal of formulas. Exploiting the non-monotonicity of ASP allows one to propose other revision strategies, namely an addition strategy or a combined removal and/or addition strategy. These strategies allow one to define families of rule-based revision operators. The paper presents a semantic characterization of these families of revision operators in terms of answer sets. This semantic characterization makes it possible to consider, equivalently, the evolution of syntactic logic programs and the evolution of their semantic content. The paper then studies the logical properties of the proposed operators and gives complexity results.
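
    To make the role of non-monotonicity concrete, here is a self-contained sketch (a naive guess-and-check stable model computation written for this listing, not the paper's operators): the program {a :- not b.} concludes a, and revising by adding the fact b, rather than removing a rule, is enough to retract that conclusion.

```python
from itertools import combinations


def least_model(definite_rules):
    """Forward chaining for a definite (negation-free) ground program."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos, _ in definite_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model


def stable_models(rules):
    """Enumerate stable models of a ground normal program by guess and check."""
    atoms = {h for h, _, _ in rules} | {a for _, p, n in rules for a in p | n}
    for size in range(len(atoms) + 1):
        for cand in map(set, combinations(sorted(atoms), size)):
            # Gelfond-Lifschitz reduct: drop rules whose negative body meets the candidate
            reduct = [(h, p, frozenset()) for h, p, n in rules if not (n & cand)]
            if least_model(reduct) == cand:
                yield cand


# A rule is (head, positive body, negative body); "a :- not b." is:
p = [("a", frozenset(), frozenset({"b"}))]
print(list(stable_models(p)))                       # [{'a'}]  -> a is concluded

p_revised = p + [("b", frozenset(), frozenset())]   # addition strategy: add the fact "b."
print(list(stable_models(p_revised)))               # [{'b'}]  -> a is retracted
```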

    Computing Query Answering With Non-Monotonic Rules: A Case Study of Archaeology Qualitative Spatial Reasoning

    This paper deals with querying ontology-based knowledge bases equipped with non-monotonic rules, through a case study within the framework of Cultural Heritage. It focuses on 3D underwater surveys of the Xlendi wreck, which is represented by an OWL2 knowledge base with a large dataset. The paper aims at improving the interaction between archaeologists and the knowledge base by providing new queries that involve non-monotonic rules in order to perform qualitative spatial reasoning. To this end, the knowledge base, initially represented in OWL2-QL, is translated into an equivalent Answer Set Programming (ASP) program and is enriched with a set of non-monotonic ASP rules suitable to express defaults and exceptions. An ASP query answering approach is proposed and implemented. Furthermore, thanks to the increased expressiveness of non-monotonic rules, it supports query answering about spatial reasoning and spatial relations between artifacts, which is not possible with query answering languages such as SPARQL and SQWRL.

    Synchronized Grammars and Primal Grammars

    Tree languages are powerful tools for the representation and schematization of infinite sets of terms for various purposes (unification theory, verification and specification, ...). In order to extend the regular tree language framework, more complex formalisms have been developed. In this paper, we focus on Tree Synchronized Grammars and Primal Grammars, which introduce specific control structures to represent non-regular sets of terms. We propose a common unified framework in order to achieve the membership test for these particular languages. Thanks to a proof system, we provide a full operational framework which allows us to transform tree grammars into Prolog programs (as already exists for word grammars with DCGs) whose goal is to recognize terms of the corresponding language.

    New Generation Systems for Non Monotonic Reasoning

    Default Logic is recognized as a powerful framework for knowledge representation and incomplete information management. Its expressive power is suitable for non-monotonic reasoning, but the counterpart is its very high computational complexity. The purpose of this paper is to show how heuristics such as Genetic Algorithms, Ant Colony Optimization and Local Search can be used to build an efficient non-monotonic reasoning system.
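
    To give a flavour of how a local-search heuristic can hunt for extensions (a toy restricted to literal-only default theories, written for this listing and not taken from the paper), one can score a candidate set of defaults by how far the belief set it induces is from being a Reiter fixpoint, and then hill-climb on that score:

```python
import random

random.seed(0)


def neg(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit


def gamma(W, D, E):
    """Reiter's operator, literal case: close W under defaults applicable w.r.t. E."""
    G, changed = set(W), True
    while changed:
        changed = False
        for pre, just, cons in D:
            if pre in G and neg(just) not in E and cons not in G:
                G.add(cons)
                changed = True
    return G


def score(W, D, selected):
    """0 iff the selected defaults generate an extension E = W plus their consequents."""
    E = set(W) | {cons for _, _, cons in selected}
    return len(E ^ gamma(W, D, E))               # distance to the fixpoint


def local_search(W, D, steps=1000):
    selected = set()
    for _ in range(steps):
        if score(W, D, selected) == 0:
            return set(W) | {c for _, _, c in selected}
        flipped = selected ^ {random.choice(D)}  # flip one default in or out
        if score(W, D, flipped) <= score(W, D, selected):
            selected = flipped
    return None                                  # no extension found within the budget


W = ["bird"]                                     # bird(tweety)
D = [("bird", "flies", "flies")]                 # (prerequisite, justification, consequent)
print(local_search(W, D))                        # {'bird', 'flies'}
```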

    Possibilistic stable models

    In this work, we define a new framework in order to improve the knowledge representation power of the Answer Set Programming paradigm. Our proposal is to use notions from possibility theory to extend the stable model semantics by taking into account a certainty level, expressed in terms of a necessity measure, on each rule of a normal logic program. First of all, we introduce possibilistic definite logic programs and show how to compute the conclusions of such programs both in syntactic and semantic ways. The syntactic handling is done with the help of a fixpoint operator; the semantic part relies on a possibility distribution on all sets of atoms; and we show that the two approaches are equivalent. In a second part, we define what a possibilistic stable model is for a normal logic program with default negation. Again, we define a possibility distribution that makes it possible to determine the stable models.
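
    To give a feel for the syntactic side, here is a small sketch of an immediate-consequence computation for a possibilistic definite program (our own reading of the fixpoint operator mentioned above, with illustrative rules and certainty values): a derived atom inherits the minimum of the rule's necessity degree and of its body atoms' degrees, and competing derivations keep the maximum.

```python
# A possibilistic definite rule: (necessity degree, head, body atoms).
program = [
    (0.9, "wet", ["rain"]),          # with certainty 0.9, rain makes the ground wet
    (0.6, "slippery", ["wet"]),      # with certainty 0.6, wet ground is slippery
    (0.8, "rain", []),               # the fact "rain", with necessity 0.8
]


def possibilistic_fixpoint(rules):
    """Iterate the possibilistic immediate-consequence operator until nothing changes."""
    val = {}                         # atom -> necessity degree derived so far
    changed = True
    while changed:
        changed = False
        for alpha, head, body in rules:
            if all(b in val for b in body):
                # the rule fires with min(rule degree, body degrees); keep the best derivation
                degree = min([alpha] + [val[b] for b in body])
                if degree > val.get(head, 0.0):
                    val[head] = degree
                    changed = True
    return val


print(possibilistic_fixpoint(program))
# {'rain': 0.8, 'wet': 0.8, 'slippery': 0.6}
```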